Structure learning and the Occam's razor principle: a new view of human function acquisition

Authors

  • Devika Narain
  • Jeroen B. J. Smeets
  • Pascal Mamassian
  • Eli Brenner
  • Robert J. van Beers
Abstract

We often encounter pairs of variables in the world whose mutual relationship can be described by a function. After training, human responses closely correspond to these functional relationships. Here we study how humans predict unobserved segments of a function that they have been trained on, and we compare human predictions to those made by various function-learning models in the literature. Participants' performance was best predicted by the polynomial functions that generated the observations. Furthermore, in a post-experiment survey, participants were able to explicitly report the correct generating function in most cases. This suggests that humans can abstract functions. To understand how they do so, we modeled human learning using a hierarchical Bayesian framework organized at two levels of abstraction, function learning and parameter learning, and used it to track the time course of participants' learning as we surreptitiously changed the generating function over time. This Bayesian model selection framework allowed us to analyze the time course of function learning and parameter learning in relative isolation. We found that participants acquired new functions as the generating function changed, and that even when parameter learning was not completely accurate, the probability that the correct function was learned remained high. Most importantly, we found that humans selected the simplest-fitting function with the highest probability and that they acquired simpler functions faster than more complex ones. Both aspects of this behavior, extent and rate of selection, present evidence that human function learning obeys the Occam's razor principle.
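The model-selection idea the abstract describes can be illustrated with a minimal sketch: compare polynomial function hypotheses on the same data and let a complexity penalty stand in for the Occam factor. Here BIC is used as a standard approximation to the log marginal likelihood; the data, degrees, and function names below are illustrative assumptions, not the paper's actual stimuli or model.

```python
import numpy as np

# Illustrative data: a linear generating function plus small deterministic
# pseudo-noise (a fast sine), so the example is exactly reproducible.
x = np.linspace(-1.0, 1.0, 100)
y = 2.0 * x + 1.0 + 0.1 * np.sin(37.0 * x)

def bic(x, y, degree):
    """BIC of a least-squares polynomial fit: -2*log-likelihood + penalty."""
    n = x.size
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    sigma2 = np.mean(resid ** 2)        # ML estimate of the noise variance
    k = degree + 2                      # polynomial coefficients + variance
    log_lik = -0.5 * n * (np.log(2.0 * np.pi * sigma2) + 1.0)
    return -2.0 * log_lik + k * np.log(n)

# Score competing function hypotheses (constant through cubic); lower is better.
scores = {d: bic(x, y, d) for d in range(4)}
best = min(scores, key=scores.get)
print(best)  # the simplest adequate hypothesis (degree 1) wins
```

The cubic fits the data at least as well as the line, but the penalty term grows with the number of parameters, so the simplest function that explains the data is selected; this is the same trade-off the hierarchical Bayesian account attributes to human learners.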


Similar articles

Genetic Programming with Guaranteed Quality

When using genetic programming (GP) or other techniques that try to approximate unknown functions, the principle of Occam's razor is often applied: find the simplest function that explains the given data, as it is assumed to be the best approximation for the unknown function. Using a well-known result from learning theory, it is shown in this paper how Occam's razor can help GP in finding function...


Conditions for Occam's Razor Applicability and Noise Elimination

The Occam's razor principle suggests that among all the correct hypotheses, the simplest hypothesis is the one which best captures the structure of the problem domain and has the highest prediction accuracy when classifying new instances. This principle is also implicitly used for dealing with noise, in order to avoid overfitting a noisy training set by rule truncation or by pruning of decision ...


Learning the Structure and Parameters of Large-Population Graphical Games from Behavioral Data: Supplementary Material

Clearly, using structural properties alone, one would generally prefer the former two models to the latter, all else being equal (e.g. generalization performance). A large part of the econometrics literature concerns the issue of identifiability of models from data. In typical machine-learning fashion, we side-step this issue by measuring the quality of our data-induced models via their gener...


Explanatory and Creative Alternatives to the MDL Principle (José Hernández-Orallo and Ismael García-Varea)

The Minimum Description Length (MDL) principle is the modern formalisation of Occam’s razor. It has been extensively and successfully used in machine learning (ML), especially for noisy and long sources of data. However, the MDL principle presents some paradoxes and inconveniences. After discussing all these, we address two of the most relevant: lack of explanation and lack of creativity. We pr...


Computational Learning Theory, Fall Semester 2010, Lecture 3: October 31

In this lecture we will talk about the PAC model. The PAC learning model is one of the most important and well-known learning models. PAC stands for Probably Approximately Correct: our goal is to learn a hypothesis from a hypothesis class such that with high confidence we will have a small error rate (approximately correct). We start the lecture with an intuitive example to explain the idea behind the PAC m...



Journal title:

Volume 8, Issue

Pages  -

Publication date: 2014